
    Assessment of a method to detect signals for updating systematic reviews.

Background: Systematic reviews are a cornerstone of evidence-based medicine but are useful only if up-to-date. Methods for detecting signals that a systematic review needs updating have face validity, but no proposed method has been assessed for predictive validity.

    Methods: By 2009, the AHRQ Comparative Effectiveness Review program had produced 13 comparative effectiveness reviews (CERs), a subcategory of systematic reviews. In 2009, 11 of these were assessed using a surveillance system to determine the degree to which individual conclusions were out of date and to assign each report a priority for updating: four CERs were judged high priority, four medium priority, and three low priority. AHRQ then commissioned full update reviews for 9 of these 11 CERs. Where possible, we matched the original conclusions with their corresponding conclusions in the update reports, and compared the congruence of these pairs with our original predictions about which conclusions in each CER remained valid, classifying the concordance of each pair as good, fair, or poor. We also made a summary determination of the priority for updating each CER based on the actual changes in conclusions in the updated report, and compared these determinations with the earlier assessments of priority.

    Results: The 9 CERs included 149 individual conclusions, 84% of which had matches in the update reports. Across reports, 83% of matched conclusions had good concordance, and 99% had good or fair concordance. The one instance of poor concordance was partially attributable to the publication of new evidence after the surveillance signal searches had been done. Both CERs originally judged low priority for updating had no substantive changes to their conclusions in the actual updated report. Agreement on overall priority for updating between prediction and actual changes to conclusions was Kappa = 0.74.

    Conclusions: These results provide some support for the validity of a surveillance system for detecting signals that a systematic review needs updating.
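The agreement statistic quoted above (Kappa = 0.74) is Cohen's kappa, chance-corrected agreement between the predicted and actual update priorities. A minimal sketch of the computation, using hypothetical high/medium/low ratings (the paper's per-report ratings are not reproduced here, so the illustrative result will not match 0.74):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items on which the two ratings agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rating's marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical update-priority ratings for nine reports:
predicted = ["high", "high", "medium", "medium", "low", "low", "high", "medium", "high"]
actual    = ["high", "high", "medium", "low",    "low", "low", "high", "medium", "medium"]
print(round(cohens_kappa(predicted, actual), 2))  # → 0.67
```

Kappa of 1 would mean perfect agreement and 0 agreement no better than chance, so the reported 0.74 indicates substantial (though imperfect) agreement between predicted and actual priorities.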

    Hospital fall prevention: a systematic review of implementation, components, adherence, and effectiveness.

Objectives: To systematically document the implementation, components, comparators, adherence, and effectiveness of published fall prevention approaches in U.S. acute care hospitals.

    Design: Systematic review. Studies were identified through existing reviews, searches of five electronic databases, screening of reference lists, and contact with topic experts, covering studies published through August 2011.

    Setting: U.S. acute care hospitals.

    Participants: Studies reporting in-hospital falls for intervention groups and concurrent (e.g., controlled trials) or historic comparators (e.g., before-after studies).

    Intervention: Fall prevention interventions.

    Measurements: Incidence rate ratios (IRR: the ratio of the fall rate in the postintervention or treatment group to the fall rate in the preintervention or control group) and ratings of study details.

    Results: Fifty-nine studies met inclusion criteria. Implementation strategies were sparsely documented (17% not at all) and included staff education, establishing committees, seeking leadership support, and occasionally continuous quality improvement techniques. Most interventions (81%) included multiple components, e.g., risk assessments (often not validated), visual risk alerts, patient education, care rounds, bed-exit alarms, and postfall evaluations. Fifty-four percent of studies did not report on fall prevention measures applied in the comparison group, and 39% neither reported fidelity data nor described adherence strategies, such as regular audits and feedback, to ensure completion of care processes. Only 45% of concurrent and 15% of historic control studies reported sufficient data to compare fall rates. The pooled postintervention IRR was 0.77 (95% confidence interval = 0.52-1.12; P = .17; eight studies; I² = 94%). Meta-regressions showed no systematic association between IRR and implementation intensity, intervention complexity, comparator information, or adherence levels.

    Conclusion: Promising approaches exist, but better reporting of outcomes, implementation, adherence, intervention components, and comparison group information is necessary to establish evidence on how hospitals can successfully prevent falls.
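The abstract's outcome measure, the IRR, is the ratio of the fall rate after the intervention (or in the treatment group) to the fall rate before it (or in the control group), where fall rates are conventionally expressed per 1,000 patient-days. A minimal sketch with hypothetical counts (the figures below are illustrative, not drawn from any included study):

```python
def fall_rate(falls, patient_days):
    """Fall rate per 1,000 patient-days."""
    return 1000 * falls / patient_days

def incidence_rate_ratio(falls_post, days_post, falls_pre, days_pre):
    """IRR: postintervention (or treatment) fall rate over
    preintervention (or control) fall rate."""
    return fall_rate(falls_post, days_post) / fall_rate(falls_pre, days_pre)

# Hypothetical: 40 falls over 12,000 patient-days after the program,
# versus 52 falls over 12,500 patient-days before it.
irr = incidence_rate_ratio(40, 12_000, 52, 12_500)
print(round(irr, 2))  # → 0.8 (an IRR below 1 means fewer falls after the intervention)
```

The pooled IRR of 0.77 reported above would correspond to roughly a 23% reduction in the fall rate, but its confidence interval (0.52-1.12) crosses 1, which is why the result is not statistically significant.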

    Reporting of context and implementation in studies of global health interventions: a pilot study.

Background: There is an increasing push for 'evidence-based' decision making in global health policy circles. However, at present there are no agreed-upon standards or guidelines for evaluating evidence in global health. Recent evaluations of existing evidence frameworks that could serve such a purpose have identified details of program context and project implementation as missing components needed to inform policy. We performed a pilot study to assess the current state of reporting of context and implementation in studies of global health interventions.

    Methods: We identified three existing criteria sets for implementation reporting and selected from them 10 criteria potentially relevant to the needs of policy makers in global health contexts. We applied these 10 criteria to 15 articles included in the evidence base for three global health interventions chosen to represent a diverse set of advocated global health programs: household water chlorination, prevention of mother-to-child transmission of HIV, and lay community health workers to reduce child mortality. We rated each criterion on a good-fair-poor/none scale.

    Results: The proportion of criteria for which reporting was poor/none ranged from 11% to 54%, with an average of 30%. Eight articles had 'good' or 'fair' documentation for more than 75% of criteria, while five articles had 'poor or none' documentation for 50% of criteria or more. Examples of good reporting were identified.

    Conclusions: Reporting of context and implementation information in studies of global health interventions is mostly fair or poor, and highly variable. The idiosyncratic variability in reporting indicates that global health investigators need more guidance about which aspects of context and implementation to measure and how to report them. This lack of context and implementation information is a major gap in the evidence needed by global health policy makers to reach decisions.

    Results on three exemplars applied to six evidence frameworks.

a. The Tang et al. grade for PMTCT is due to the strict rule that only interventions with relative risk (RR) > 2 qualify as "strong" evidence. If this rule were flexible, we would rate PMTCT as "Grade 1 Level 1 Strong" under the Tang et al. categorization.
    b. Grade 2c Level 2 if repeatability outside Southeast Asia is not considered acceptable.